Bhaskarjit Sarmah

Vice President


Bhaskarjit Sarmah, Vice President and Data Scientist at BlackRock, leverages over 10 years of data science expertise across diverse industries. At BlackRock, he has pioneered machine learning solutions to bolster liquidity risk analytics, uncover pricing opportunities in securities lending, and develop market regime change detection systems using network science. Bhaskarjit's proficiency extends to natural language processing and computer vision, enabling him to extract insights from unstructured data and deliver actionable reports. Committed to empowering investors and fostering superior financial outcomes, he embodies a strategic fusion of data-driven innovation and domain knowledge at the world's largest asset management firm.

Welcome to the cutting edge of financial innovation, where the intersection of artificial intelligence and finance is transforming the landscape of opportunity. In this workshop, "GenAI for Finance: Applications & Responsible Use," we invite you to explore the fusion of technological artistry with financial pragmatism. We will build GenAI-based applications that extract insights from financial reports, such as earnings and annual reports, enabling you to make more informed, data-driven investment decisions with confidence. We will also build GenAI-based agents that connect to open-source APIs to extract financial data and create trading signals.
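To give a flavour of the agent workflow described above, here is a minimal sketch of turning a price series into a trading signal, assuming a simple moving-average crossover rule. The `prices` list is synthetic and stands in for data an agent would fetch from a market-data API; it is an illustration, not the workshop's actual implementation.

```python
# Minimal sketch: a moving-average crossover trading signal.
# An agent would fetch `prices` from a market-data API; a synthetic
# series is used here so the example is self-contained.

def moving_average(prices, window):
    """Simple moving average over the trailing `window` observations."""
    if len(prices) < window:
        return None
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """Return 'buy' when the short MA is above the long MA, 'sell' when below."""
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma is None or long_ma is None:
        return "hold"  # not enough history yet
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"

prices = [100, 101, 99, 102, 104, 107, 110]  # synthetic uptrend
print(crossover_signal(prices))  # prints "buy": the short MA sits above the long MA
```

In the workshop setting, an agent would wrap a call like `crossover_signal` around live data and let the language model decide when to invoke it as a tool.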

Through our immersive modules, you'll delve into the intricacies of Retrieval-Augmented Generation (RAG) and AI Agents, learning to harness these tools skillfully to generate insights from financial documents. This journey, however, isn't solely about mastering technical skills; it's about striking a balance between innovation and responsibility. As we navigate the high-stakes realm of algorithm-driven decisions, it's crucial to approach these choices with care and ethical consideration. We'll also explore how to use language models responsibly while generating insights, ensuring that our advancements in AI are both effective and conscientious.
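To make the RAG idea concrete, here is a minimal sketch of the retrieval step, assuming a bag-of-words cosine similarity as the scorer. A production system would use dense embeddings and pass the retrieved chunks into an LLM prompt; the document chunks below are invented for illustration.

```python
# Minimal sketch of the "retrieval" half of Retrieval-Augmented Generation:
# score document chunks against a query and keep the best matches as context.
# Bag-of-words cosine similarity stands in for dense embeddings here.
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=1):
    """Return the top-k chunks most similar to the query."""
    q = Counter(query.lower().split())
    scored = [(cosine_similarity(q, Counter(c.lower().split())), c) for c in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for score, c in scored[:k] if score > 0]

chunks = [
    "Revenue grew 12% year over year, driven by the asset management segment.",
    "The board declared a quarterly dividend of $1.50 per share.",
    "Liquidity risk remained stable across fixed income portfolios.",
]
context = retrieve("What was the revenue growth?", chunks)
print(context[0])  # the revenue chunk scores highest
```

The retrieved chunks would then be inserted into the prompt so the model answers from the source document rather than from memory, which is the core mechanism RAG uses to reduce hallucination.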

Large Language Models (LLMs) have showcased remarkable proficiency in tackling Natural Language Processing (NLP) tasks efficiently, significantly reducing time-to-market compared to traditional NLP pipelines. However, upon deployment, LLM applications encounter challenges concerning hallucinations, safety, security, and interpretability. 

With many countries recently introducing guidelines on responsible AI application usage, it becomes imperative to comprehend the principles of constructing and deploying LLM applications responsibly. This hands-on session aims to delve into these critical concepts, offering insights into developing and deploying LLM models alongside implementing essential guardrails for their responsible usage.
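As one illustration of the guardrails discussed above, here is a minimal sketch of two pre-release output checks, assuming a keyword blocklist and a crude groundedness test based on word overlap with the retrieved context. The blocklist phrases and thresholds are invented for illustration; real deployments rely on dedicated guardrail frameworks and model-based evaluators.

```python
# Minimal sketch of two output guardrails for an LLM application:
# 1) a blocklist check for disallowed content,
# 2) a crude groundedness check that flags answers with little overlap with
#    the retrieved source context (a cheap proxy for hallucination detection).
# Phrases and threshold are illustrative assumptions, not a standard.

BLOCKLIST = {"guaranteed returns", "insider information"}

def violates_blocklist(answer):
    """True if the answer contains any disallowed phrase."""
    text = answer.lower()
    return any(phrase in text for phrase in BLOCKLIST)

def is_grounded(answer, context, threshold=0.3):
    """True if enough answer words also appear in the source context."""
    answer_words = set(answer.lower().split())
    context_words = set(context.lower().split())
    if not answer_words:
        return False
    overlap = len(answer_words & context_words) / len(answer_words)
    return overlap >= threshold

context = "revenue grew 12% year over year driven by asset management"
good = "revenue grew 12% year over year"
bad = "the company promises guaranteed returns next quarter"

print(violates_blocklist(bad), is_grounded(good, context))  # prints "True True"
```

In a deployed pipeline, checks like these would run on every model response before it reaches the user, with failures routed to a refusal message or a human reviewer.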

Managing and scaling ML workloads has never been a bigger challenge. Data scientists need to collaborate while building, training, and iterating on thousands of AI experiments. ML engineers, for their part, need distributed training, artifact management, and automated, high-performance deployment.

